Dual discriminator adversarial distillation for data-free model compression

Authors

Abstract

Knowledge distillation has been widely used to produce portable and efficient neural networks that can be deployed on edge devices for computer vision tasks. However, almost all top-performing knowledge distillation methods need access to the original training data, which is usually huge in size and often unavailable. To tackle this problem, we propose a novel data-free approach, named Dual Discriminator Adversarial Distillation (DDAD), which distills a network without any training data or meta-data. To be specific, we use a generator to create samples through dual discriminator adversarial distillation, which mimics the original training data. The generator not only uses the pre-trained teacher's intrinsic statistics stored in its batch normalization layers but also seeks samples that maximize the discrepancy between the teacher and the student. The generated samples are then used to train the compact student under the supervision of the teacher. The proposed method yields an efficient student network that closely approximates its teacher, without using any original data. Extensive experiments on the CIFAR, Caltech101 and ImageNet classification datasets demonstrate the effectiveness of the approach. Moreover, we extend our method to semantic segmentation on several public datasets such as CamVid, NYUv2, Cityscapes and VOC 2012. To the best of our knowledge, this is the first generative-model-based data-free distillation work evaluated on large-scale ImageNet. Experiments show that our method outperforms existing baselines for data-free distillation.
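
The abstract describes the training scheme only at a high level. Below is a minimal PyTorch-style sketch of the idea under stated assumptions: a generator is optimized to match the teacher's stored batch-normalization statistics while maximizing the teacher-student discrepancy, and the student is then distilled on the generated samples. The module names (generator, teacher, student), the L1 discrepancy term, and the equal loss weighting are illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def collect_bn_inputs(teacher):
    # Record the input activation of every BatchNorm2d layer of the (frozen) teacher.
    records, handles = [], []
    for m in teacher.modules():
        if isinstance(m, nn.BatchNorm2d):
            handles.append(m.register_forward_hook(
                lambda mod, inp, out, bn=m: records.append((bn, inp[0]))))
    return records, handles

def bn_statistics_loss(records):
    # Encourage generated activations to match the teacher's stored BN running
    # statistics (its "intrinsic statistics").
    loss = 0.0
    for bn, x in records:
        mu = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        loss = loss + F.mse_loss(mu, bn.running_mean) + F.mse_loss(var, bn.running_var)
    return loss

def train_step(generator, teacher, student, opt_g, opt_s, batch=128, z_dim=256):
    # Generator step: match BN statistics and *maximize* teacher-student discrepancy,
    # so the samples expose inputs on which the student still disagrees with the teacher.
    z = torch.randn(batch, z_dim)
    records, handles = collect_bn_inputs(teacher)
    fake = generator(z)
    t_logits = teacher(fake)          # teacher assumed frozen and in eval() mode
    s_logits = student(fake)
    loss_g = bn_statistics_loss(records) - F.l1_loss(s_logits, t_logits)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    for h in handles:
        h.remove()

    # Student step: distill from the teacher on the generated samples.
    fake = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(fake)
    s_logits = student(fake)
    loss_s = F.kl_div(F.log_softmax(s_logits, dim=1),
                      F.softmax(t_logits, dim=1), reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

In this sketch the two updates simply alternate every iteration, with the teacher kept frozen so that its running BN statistics serve as fixed targets; the real method may weight or schedule the losses differently.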

Similar articles

Dual Discriminator Generative Adversarial Nets

We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial network (GAN). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus it exploits the complementary statist...
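
For orientation, a dual-discriminator objective of the kind described here can be written as follows; the weights α, β and the restriction of D1, D2 to positive outputs are stated here as assumptions, and the exact constants in the paper may differ:

\min_G \max_{D_1, D_2} \; \alpha\, \mathbb{E}_{x \sim P_{\mathrm{data}}}\big[\log D_1(x)\big] + \mathbb{E}_{z \sim P_z}\big[-D_1(G(z))\big] + \mathbb{E}_{x \sim P_{\mathrm{data}}}\big[-D_2(x)\big] + \beta\, \mathbb{E}_{z \sim P_z}\big[\log D_2(G(z))\big]

With optimal discriminators, the generator term reduces (up to additive constants) to \alpha\,\mathrm{KL}(P_{\mathrm{data}} \,\|\, P_G) + \beta\,\mathrm{KL}(P_G \,\|\, P_{\mathrm{data}}), which is the sense in which the KL and reverse-KL divergences are combined into one objective.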

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that well-trained DNNs can be easily misled by adversarial examples (AE), maliciously crafted inputs produced by introducing small and imperceptible perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from...

Model compression via distillation and quantization

Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, wh...

Data-Free Knowledge Distillation for Deep Neural Networks

Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose relea...

Adversarial Network Compression

Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach to train the small student network to mimic the large teacher, without the nee...

Journal

Journal title: International Journal of Machine Learning and Cybernetics

Year: 2021

ISSN: 1868-8071, 1868-808X

DOI: https://doi.org/10.1007/s13042-021-01443-0